

Search for: All records

Creators/Authors contains: "Baker, Ryan S."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. In past work, time management interventions involving prompts, alerts, and planning tools have successfully nudged students in online courses, leading to higher engagement and improved performance. However, few studies have investigated the effectiveness of these interventions over time, examining whether their effectiveness is maintained or changes with dosage (i.e., how often an intervention is provided). In the current study, we conducted a randomized controlled trial to test whether the effect of a time management intervention changes over repeated use. Students in an online computer science course were randomly assigned to one of two intervention schedules (high-dosage vs. low-dosage). We ran a two-way mixed ANOVA, comparing students' assignment start times and performance across several weeks. Unexpectedly, we found no significant main effect of the intervention, nor an interaction between the intervention and week of the course.
    Free, publicly-accessible full text available July 20, 2024
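The analysis design in the abstract above (one between-subjects factor, dosage schedule; one within-subjects factor, course week) can be sketched as a hand-rolled two-way mixed ANOVA. The data, group sizes, and scores below are simulated placeholders, not the study's data.

```python
import random

def mixed_anova(data):
    """data[g][s][w]: score for subject s in group g (between factor)
    at week w (within factor); balanced design assumed.
    Returns F statistics for the between factor, the within factor,
    and their interaction, plus the sum-of-squares decomposition."""
    a = len(data)              # levels of between-subjects factor
    n = len(data[0])           # subjects per group
    b = len(data[0][0])        # levels of within-subjects factor
    flat = [x for g in data for s in g for x in s]
    grand = sum(flat) / len(flat)

    ss_total = sum((x - grand) ** 2 for x in flat)
    subj_means = [[sum(s) / b for s in g] for g in data]
    ss_between_subj = b * sum((m - grand) ** 2 for g in subj_means for m in g)
    a_means = [sum(g) / n for g in subj_means]
    ss_a = n * b * sum((m - grand) ** 2 for m in a_means)
    ss_subj_within = ss_between_subj - ss_a     # error term for factor A

    b_means = [sum(data[g][s][w] for g in range(a) for s in range(n)) / (a * n)
               for w in range(b)]
    ss_b = a * n * sum((m - grand) ** 2 for m in b_means)
    cell_means = [[sum(data[g][s][w] for s in range(n)) / n for w in range(b)]
                  for g in range(a)]
    ss_ab = n * sum((cell_means[g][w] - a_means[g] - b_means[w] + grand) ** 2
                    for g in range(a) for w in range(b))
    ss_err = (ss_total - ss_between_subj) - ss_b - ss_ab  # within-subjects error

    f_a = (ss_a / (a - 1)) / (ss_subj_within / (a * (n - 1)))
    f_b = (ss_b / (b - 1)) / (ss_err / (a * (n - 1) * (b - 1)))
    f_ab = (ss_ab / ((a - 1) * (b - 1))) / (ss_err / (a * (n - 1) * (b - 1)))
    return f_a, f_b, f_ab, (ss_a, ss_subj_within, ss_b, ss_ab, ss_err, ss_total)

# Simulated scores: 2 dosage groups x 8 subjects x 4 weeks
random.seed(0)
data = [[[random.gauss(70 + 2 * w, 5) for w in range(4)]
         for _ in range(8)] for _ in range(2)]
f_a, f_b, f_ab, ss = mixed_anova(data)
```

The sums of squares partition exactly (SS_A + SS_subjects-within-A + SS_B + SS_AB + SS_error = SS_total), which is a useful sanity check on any hand-rolled implementation.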
  2. Massive Open Online Courses (MOOCs) have increased the accessibility of quality educational content to a broader audience across a global network. They provide students access to material that would be difficult to obtain locally, and an abundance of data for educational researchers. Despite the international reach of MOOCs, however, the majority of MOOC research does not account for demographic differences relating to learners' country of origin or cultural background, which have been shown to have implications for the robustness of predictive models and interventions. This paper presents an exploration into the role of nation-level metrics of culture, happiness, wealth, and size on the generalizability of completion prediction models across countries. The findings indicate that various dimensions of culture are predictive of cross-country model generalizability. Specifically, learners from indulgent, collectivist, uncertainty-accepting, or short-term oriented countries produce more generalizable predictive models of learner completion.
    Free, publicly-accessible full text available July 20, 2024
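The cross-country generalizability question above can be illustrated with a from-scratch logistic regression trained on one synthetic "country" and evaluated on another where the same behavioral signal predicts the opposite outcome. The feature-generating process, country names, and gap are invented for illustration, not taken from the paper.

```python
import math, random

random.seed(0)

def make_country(n, flip):
    """Synthetic learner records: 2 behavioral features, binary
    completion label. `flip` reverses the feature-label relationship,
    mimicking a country where the same signal means something different."""
    X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]
    y = [int((x[0] < 0) if flip else (x[0] > 0)) for x in X]
    return X, y

def train_logreg(X, y, lr=0.1, epochs=50):
    """Plain stochastic gradient descent on the logistic loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))
            w = [wj - lr * (p - yi) * xj for wj, xj in zip(w, xi)]
            b -= lr * (p - yi)
    return w, b

def accuracy(model, X, y):
    w, b = model
    hits = sum(int((sum(wj * xj for wj, xj in zip(w, xi)) + b >= 0) == bool(yi))
               for xi, yi in zip(X, y))
    return hits / len(y)

Xa, ya = make_country(200, flip=False)   # "country A": training population
Xb, yb = make_country(200, flip=True)    # "country B": held-out population
model = train_logreg(Xa, ya)
acc_in, acc_cross = accuracy(model, Xa, ya), accuracy(model, Xb, yb)
```

Because the label relationship is reversed in the held-out country, in-country accuracy is high while cross-country accuracy collapses, which is the kind of generalization gap the paper probes at a national level.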
  4. Automated, data-driven decision making is increasingly common in a variety of application domains. In educational software, for example, machine learning has been applied to tasks like selecting the next exercise for students to complete. Machine learning methods, however, are not always equally effective for all groups of students. Current approaches to designing fair algorithms tend to focus on statistical measures concerning a small subset of legally protected categories like race or gender. Focusing solely on legally protected categories, however, can limit our understanding of bias and unfairness by ignoring the complexities of identity. We propose an alternative approach to categorization, grounded in sociological techniques of measuring identity. By soliciting survey data and interviews from the population being studied, we can build context-specific categories from the bottom up. The emergent categories can then be combined with extant algorithmic fairness strategies to discover which identity groups are not well-served, and thus where algorithms should be improved or avoided altogether. We focus on educational applications but present arguments that this approach should be adopted more broadly for issues of algorithmic fairness across a variety of applications.
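One way to combine emergent, bottom-up categories with extant fairness strategies, as the abstract above proposes, is a per-group accuracy audit. The group labels, records, and threshold below are illustrative assumptions, not categories or data from the paper.

```python
def underserved_groups(records, threshold=0.8):
    """records: iterable of (group_label, predicted, actual) triples,
    where group_label comes from bottom-up, survey-derived categories.
    Returns (groups whose accuracy falls below `threshold`, all accuracies)."""
    totals, hits = {}, {}
    for group, pred, actual in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(pred == actual)
    acc = {g: hits[g] / totals[g] for g in totals}
    return {g: a for g, a in acc.items() if a < threshold}, acc

# Illustrative: a model that serves one emergent group poorly.
records = ([("first-gen", 1, 1)] * 6 + [("first-gen", 1, 0)] * 4 +
           [("continuing-gen", 1, 1)] * 9 + [("continuing-gen", 0, 1)] * 1)
flagged, acc = underserved_groups(records)
```

Groups flagged by the audit are candidates for model improvement, or for not deploying the algorithm at all, per the paper's framing.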
  5. Sensor-free affect detectors detect student affect from students' activities within intelligent tutoring systems or other online learning environments rather than from physical sensors. This technology has made affect detection more scalable and less invasive. However, existing detectors are either interpretable but less accurate (e.g., classical algorithms such as logistic regression) or more accurate but uninterpretable (e.g., neural networks). We investigate the use of a new type of neural network, monotonic after the first layer, for affect detection that can strike a balance between accuracy and interpretability. Results on a real-world student affect dataset show that monotonic neural networks achieve detection accuracy comparable to their non-monotonic counterparts while offering some level of interpretability.
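A minimal sketch of the "monotonic after the first layer" idea above: the first layer is unconstrained, while the second-layer weights are forced nonnegative (here via exp), so the output can never decrease when a hidden activation increases. The layer sizes, parameterization, and inputs are assumptions for illustration, not the paper's architecture.

```python
import math, random

random.seed(1)

IN, HIDDEN = 3, 4          # assumed sizes, for illustration only
W1 = [[random.gauss(0, 1) for _ in range(IN)] for _ in range(HIDDEN)]
V = [random.gauss(0, 1) for _ in range(HIDDEN)]   # raw params, any sign

def hidden(x):
    """Unconstrained first layer (ReLU): free to learn arbitrary features."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]

def head(h):
    """Monotonic head: exp(v) >= 0 guarantees the sigmoid output is
    non-decreasing in every hidden activation, which is what makes the
    hidden units individually interpretable as risk-increasing factors."""
    z = sum(math.exp(v) * hi for v, hi in zip(V, h))
    return 1 / (1 + math.exp(-z))

x = [0.5, -1.0, 2.0]
h = hidden(x)
bumped = [hi + 0.1 for hi in h]   # raise every hidden activation
```

Raising any hidden activation, singly or all at once, can only raise (or leave unchanged) the detector's output, which is the interpretability property the abstract trades against raw accuracy.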
  6. Classroom and lab-based research has shown the advantages of exposing students to a variety of problems with format differences among them, compared to giving students problem sets with a single problem format. In this paper, we investigate whether this approach can be effectively deployed in an intelligent tutoring system, which affords the opportunity to automatically generate and adapt problem content for practice and assessment purposes. We conducted a randomized controlled trial to compare students who practiced problems based on a single template to students who practiced problems based on multiple templates within the same intelligent tutoring system. No conclusive evidence was found for differences between the two conditions in students' post-test performance or hint-request behavior. However, students who saw multiple templates spent more time answering practice items than students who solved problems of a single structure, making the same degree of progress but taking longer to do so.
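The single- vs. multiple-template manipulation above can be sketched as template-based item generation: the same underlying skill surfaced through one fixed template or through several. The templates and value ranges are invented placeholders, not the tutoring system's actual content.

```python
import random

random.seed(2)

# Hypothetical surface templates targeting the same underlying skill.
TEMPLATES = [
    "What is {a} + {b}?",
    "A shelf holds {a} books and {b} more arrive. How many books are there?",
    "Compute the sum of {a} and {b}.",
]

def generate_item(multiple_templates=True):
    """Single-template condition always uses TEMPLATES[0];
    multiple-template condition samples a surface form at random.
    The correct answer is computed from the sampled values."""
    template = random.choice(TEMPLATES) if multiple_templates else TEMPLATES[0]
    a, b = random.randint(2, 20), random.randint(2, 20)
    return {"question": template.format(a=a, b=b),
            "answer": a + b, "a": a, "b": b}

item = generate_item(multiple_templates=True)
single = generate_item(multiple_templates=False)
```

Keeping the answer computation separate from the surface template is what lets a tutor vary problem format while scoring both conditions identically.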